- Description: this article covers the rapid response process and practical commands for when a VPS on a CN2 line in a US data center comes under heavy traffic or an application-layer attack.
- Goal: quickly restore availability, minimize business interruption, locate the attack source, and put follow-up protection in place.
- SYN/UDP/ICMP flood: exhausts network-layer bandwidth and connection tables.
- Application-layer HTTP flood: requests look legitimate but arrive in high volume, exhausting Nginx/Apache CPU and memory.
- SSH/FTP brute force: large numbers of login attempts cause authentication failures and consume resources.
- Amplification attack (NTP/DNS): spoofed source addresses trigger large volumes of reflected packets. (Quick signal checks for each type follow this list.)
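As a rough first pass at telling these apart, the sketch below assumes a Debian/Ubuntu box, an interface named eth0, and the default /var/log/auth.log path; the 5-second window is an arbitrary choice, not a detector.

```bash
# SYN flood hint: count half-open connections.
ss -Htan state syn-recv | wc -l

# UDP/ICMP flood hint: 5-second packet counts per protocol on eth0.
for proto in tcp udp icmp; do
  n=$(sudo timeout 5 tcpdump -nn -i eth0 "$proto" 2>/dev/null | wc -l)
  echo "$proto: $n packets in 5s"
done

# Brute-force hint: failed SSH logins in the auth log (Debian/Ubuntu path).
sudo grep -c 'Failed password' /var/log/auth.log
```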
1) Log in to the VPS console (if SSH is unavailable, use the hosting provider's web console).
2) View real-time network traffic: sudo iftop -i eth0 or sudo nload eth0 (substitute your actual interface name).
3) Check connection state: sudo ss -tanp | head -n 50 or netstat -anp | grep ESTABLISHED.
4) If traffic is abnormally high, immediately enable a temporary rate limit or drop policy (see the firewall mitigation commands below); a combined triage sketch follows this list.
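The triage sketch referenced above bundles these first checks into one script; the default interface name and the IPv4-only peer parsing are assumptions to adjust.

```bash
#!/usr/bin/env bash
# quick-triage.sh - first-look snapshot of load, traffic, and top peers.
IFACE=${1:-eth0}

echo "== system load =="
uptime

echo "== interface counters (run twice, a few seconds apart, to estimate rate) =="
ip -s link show "$IFACE"

echo "== top 10 peers by established TCP connections =="
ss -Htan | awk '$1 == "ESTAB" { sub(/:[0-9]+$/, "", $5); print $5 }' \
  | sort | uniq -c | sort -nr | head
```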
- Capture a traffic sample: sudo tcpdump -nn -s 96 -c 200 -w /tmp/attack.pcap.
- Count source IPs: sudo tcpdump -nn -r /tmp/attack.pcap | awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -nr | head.
- Check system load and processes: top or htop; find the process holding a port: sudo lsof -i :80 or sudo ss -lptn.
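To see which service the capture is aimed at, a companion sketch that extracts destination ports from the same pcap; it assumes IPv4 TCP/UDP lines in tcpdump's standard output format.

```bash
# Field 5 of tcpdump output is the destination as a.b.c.d.port followed by ':';
# strip the colon and take the last dot-separated field as the port.
sudo tcpdump -nn -r /tmp/attack.pcap 2>/dev/null \
  | awk '{ gsub(/:$/, "", $5); n = split($5, p, "."); print p[n] }' \
  | sort | uniq -c | sort -nr | head
```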
- Block a single IP: sudo iptables -I INPUT -s 1.2.3.4 -j DROP.
- Block an IP range: sudo iptables -I INPUT -s 203.0.113.0/24 -j DROP (for large numbers of IPs, see the ipset sketch after this list).
- Use conntrack to flush an attacker's existing connections: sudo apt-get install -y conntrack && sudo conntrack -D -s 1.2.3.4.
- If the machine runs nftables: sudo nft add rule inet filter input ip saddr 1.2.3.4 drop (assumes the inet filter table and input chain already exist).
- Local rate-limit example (cap egress traffic at 100 Mbps): sudo tc qdisc add dev eth0 root handle 1: htb default 1; sudo tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit.
- Blackhole routing (if available upstream): contact the data center/backbone provider for a BGP blackhole or traffic scrubbing (provide the target IP and a time window).
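When the blocklist grows beyond a handful of addresses, per-IP iptables rules become slow to manage; the sketch below uses ipset instead, so one rule matches the whole set. The set name blocklist and the input file /tmp/bad_ips.txt are assumed names for this example.

```bash
# One iptables rule matches the entire set, however many IPs it holds.
sudo apt-get install -y ipset
sudo ipset create blocklist hash:ip hashsize 4096 -exist
sudo iptables -I INPUT -m set --match-set blocklist src -j DROP

# Load IPs (one per line) into the set; -exist makes re-runs idempotent.
while read -r ip; do
  sudo ipset add blocklist "$ip" -exist
done < /tmp/bad_ips.txt
```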
- Enable rate limiting: add limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; to the Nginx http block, and use limit_req zone=one burst=20; in the location block (a placement sketch follows this list).
- Put static resources behind caching and a CDN (Cloudflare/Alibaba Cloud CDN) to divert traffic.
- For dynamic requests, add a CAPTCHA or WAF policy; enable ModSecurity or use a cloud WAF.
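A sketch of where those Nginx directives live; the conf.d path assumes a stock Debian/Ubuntu Nginx layout whose nginx.conf includes conf.d/*.conf inside the http block.

```bash
# The zone must be defined at http level; conf.d is included there by default.
sudo tee /etc/nginx/conf.d/ratelimit.conf >/dev/null <<'EOF'
limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
EOF

# Then add inside the server/location block you want to protect:
#   limit_req zone=one burst=20;

sudo nginx -t && sudo systemctl reload nginx
```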
- Change the default port and disable password login: edit /etc/ssh/sshd_config, set Port 2222 and PasswordAuthentication no, then restart with sudo systemctl restart sshd.
- Install fail2ban: sudo apt install -y fail2ban, then create /etc/fail2ban/jail.local to limit login/request frequency for sshd and Nginx (a minimal jail.local sketch follows this list).
- Use public-key authentication and restrict which users may log in (AllowUsers user).
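A minimal jail.local sketch for the fail2ban step; the ban and retry values are assumptions to tune, and the port matches the sshd change above.

```bash
sudo tee /etc/fail2ban/jail.local >/dev/null <<'EOF'
[DEFAULT]
bantime  = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
port    = 2222
EOF

sudo systemctl restart fail2ban
sudo fail2ban-client status sshd   # confirm the jail is active
```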
- Save the tcpdump files and system logs (/var/log/syslog, /var/log/nginx/access.log).
- Analyze with tools: use tshark or Zeek (formerly Bro) on the pcap; count suspicious IPs and export them as a blocklist (see the sketch after this list).
- Hand off to the upstream provider or a security vendor: include timestamps, the target IP, pcap samples, and a description of the attack type.
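A sketch of the tshark analysis and blocklist export mentioned above; the 100-packet threshold is an arbitrary assumption, and the output file feeds the ipset sketch earlier.

```bash
# Top source IPs in the capture.
tshark -r /tmp/attack.pcap -T fields -e ip.src | sort | uniq -c | sort -nr | head -n 20

# Protocol hierarchy: shows at a glance whether the flood is UDP, TCP, or HTTP.
tshark -r /tmp/attack.pcap -q -z io,phs

# Export sources seen more than 100 times as a blocklist.
tshark -r /tmp/attack.pcap -T fields -e ip.src | sort | uniq -c \
  | awk '$1 > 100 {print $2}' > /tmp/bad_ips.txt
```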
- Recovery steps: 1) gradually relax the temporary rules and observe; 2) add confirmed attacker IPs to the blacklist and persist them in the firewall configuration; 3) set up long-term WAF and CDN protection; 4) establish monitoring and alerting (Prometheus + Alertmanager, or cloud monitoring).
- Routine hardening: update the system regularly, enable automated backups, and prepare emergency scripts for blocking IPs and collecting logs (a sketch follows).
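A sketch of the emergency script mentioned in the last bullet: block one IP and snapshot evidence in a single step. The paths and the ipset name reuse assumptions from the earlier sketches.

```bash
#!/usr/bin/env bash
# emergency-block.sh <ip> - block an attacker and collect evidence.
set -euo pipefail
IP="$1"
TS=$(date +%Y%m%d-%H%M%S)
DIR="/var/log/incident-$TS"

# Prefer the ipset from the earlier sketch; fall back to a plain iptables rule.
sudo ipset add blocklist "$IP" -exist 2>/dev/null \
  || sudo iptables -I INPUT -s "$IP" -j DROP

# Snapshot logs and a short traffic sample for analysis / upstream reports.
sudo mkdir -p "$DIR"
sudo cp /var/log/syslog /var/log/nginx/access.log "$DIR/" 2>/dev/null || true
sudo timeout 10 tcpdump -nn -s 96 -w "$DIR/sample.pcap" host "$IP" || true

echo "blocked $IP, evidence in $DIR"
```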
Q: Can local protection alone withstand a large attack?
Answer: local measures (iptables, tc, rate limiting) can mitigate small-scale attacks for a short time. However, once the attack bandwidth exceeds the VPS/data center uplink or affects other resources in the same facility, you must contact the upstream operator or use cloud scrubbing/CDN and BGP blackholing. A single machine cannot withstand heavy traffic for long.
Q: How do I tell a network-layer attack from an application-layer one?
Answer: check total bandwidth with iftop/nload and connection counts with ss/netstat. High bandwidth that is mostly UDP/ICMP usually means the network layer; low bandwidth but large numbers of short TCP connections, or a flood of HTTP 200 requests with a CPU spike, usually means the application layer. A tcpdump capture can confirm it (see the sketch below).
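Two quick numbers that make the distinction concrete; this sketch assumes the interface eth0, the default Nginx access log path, and its standard combined log format.

```bash
# Network layer: UDP/ICMP packet count in a 5-second live sample.
sudo timeout 5 tcpdump -nn -i eth0 'udp or icmp' 2>/dev/null | wc -l

# Application layer: requests per second in the newest log lines
# ($4 of the combined log format is the timestamp down to the second).
tail -n 10000 /var/log/nginx/access.log | awk '{print $4}' | uniq -c | sort -nr | head
```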

Q: How do I block high-frequency attacker IPs in bulk?
Answer: use a bash script to extract high-frequency IPs from the pcap or the logs and add them to iptables in batches, for example: sudo awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -nr | head -n 200 | awk '{print $2}' | xargs -I{} sudo iptables -I INPUT -s {} -j DROP. Review the list before running it in production, and pace the execution to avoid blocking legitimate users (a guarded sketch follows).
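A guarded version of that one-liner, as the answer advises: a whitelist check plus paced inserts. The whitelist contents and the 500-request threshold are assumptions to adapt.

```bash
#!/usr/bin/env bash
# Batch-block top talkers from the access log, skipping whitelisted IPs.
WHITELIST="127.0.0.1 203.0.113.10"   # put your own office/monitoring IPs here

sudo awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -nr \
  | head -n 200 | awk '$1 > 500 {print $2}' | while read -r ip; do
    case " $WHITELIST " in *" $ip "*) echo "skip $ip"; continue ;; esac
    sudo iptables -I INPUT -s "$ip" -j DROP
    echo "blocked $ip"
    sleep 0.2   # pace the inserts so you can abort if something looks wrong
done
```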